High Performance Reference Counting and Conservative Garbage Collection
Garbage collection is an integral part of modern programming languages. It automatically
reclaims memory occupied by objects that are no longer in use. Garbage
collection began in 1960 with two algorithmic branches — tracing and reference counting.
Tracing identifies live objects by performing a transitive closure over the object
graph starting with the stacks, registers, and global variables as roots. Objects not
reached by the trace are implicitly dead, so the collector reclaims them. In contrast,
reference counting explicitly identifies dead objects by counting the number of incoming
references to each object. When an object’s count goes to zero, it is unreachable
and the collector may reclaim it.
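The reference-counting discipline described above can be sketched in a few lines. This is an illustrative Python model (the names `Obj`, `inc`, and `dec` are invented for exposition, not taken from any real collector): each object carries a count of incoming references, and when a decrement drops the count to zero the object is reclaimed and its outgoing references are decremented in turn.

```python
class Obj:
    """A toy heap object with an explicit reference count."""
    def __init__(self, name):
        self.name = name
        self.refcount = 0
        self.children = []  # outgoing references to other objects

def inc(obj):
    """A new reference to obj was created."""
    obj.refcount += 1

def dec(obj, freed):
    """A reference to obj was destroyed; reclaim at zero."""
    obj.refcount -= 1
    if obj.refcount == 0:
        # Object is unreachable: reclaim it, then drop its outgoing
        # references, which may cascade to further reclamation.
        freed.append(obj.name)
        for child in obj.children:
            dec(child, freed)

# Build a tiny graph: a root references A, and A references B.
a, b = Obj("A"), Obj("B")
a.children.append(b); inc(b)   # A -> B
inc(a)                         # root -> A

freed = []
dec(a, freed)                  # the root drops A: both become unreachable
print(freed)                   # ['A', 'B']
```

Note how reclamation is immediate and local: the moment the last reference to A is destroyed, A and everything reachable only through A are freed, with no heap-wide trace. (A real implementation must also deal with cycles, which this counting scheme alone cannot reclaim.)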
Garbage collectors require knowledge of every reference to each object, whether
the reference is from another object or from within the runtime. The runtime provides
this knowledge either by continuously keeping track of every change to each reference
or by periodically enumerating all references. The collector implementation faces two
broad choices — exact and conservative. In exact garbage collection, the compiler and
runtime system precisely identify all references held within the runtime including
those held within stacks, registers, and objects. To exactly identify references, the
runtime must introspect these references during execution, which requires support
from the compiler and significant engineering effort. In contrast, conservative
garbage collection does not require introspection of these references, but instead
treats each value ambiguously as a potential reference.
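The "ambiguous reference" idea can be illustrated with a small simulation (the addresses and names below are hypothetical, chosen only for exposition): the collector scans each word on the stack and conservatively retains any live allocation whose address that word happens to match, whether or not the word is really a pointer.

```python
# Toy conservative root scan: any stack word that equals the address
# of a live allocation is treated as a potential reference.
heap = {0x1000: "objA", 0x2000: "objB", 0x3000: "objC"}

# Stack words: one genuine pointer (0x2000), one plain integer that
# happens to look like a pointer (0x3000), and ordinary data values.
stack_words = [42, 0x2000, 0x3000, 7]

# Conservatively retain every object a stack word might point to.
retained = {heap[w] for w in stack_words if w in heap}
print(sorted(retained))  # objC is kept only because an integer
                         # coincidentally matched its address
```

The simulation shows both sides of the trade-off: no compiler support is needed to find the roots, but objC is retained (and, in a copying collector, pinned) purely because a non-pointer value looked like its address.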
Highly engineered, high performance systems conventionally use tracing and
exact garbage collection. However, other well-established but less performant systems
use either reference counting or conservative garbage collection. Reference counting has
some advantages over tracing, such as: a) it is easier to implement, b) it reclaims memory
immediately, and c) it has a local scope of operation. Conservative garbage collection
is easier to implement compared to exact garbage collection because it does not
require compiler cooperation. Because of these advantages, both reference counting
and conservative garbage collection are widely used in practice. However, because both
suffer significant performance overheads, they are generally not used in performance-critical
settings. This dissertation carefully examines reference counting and conservative
garbage collection to understand their behavior and improve their performance.
My thesis is that reference counting and conservative garbage collection can perform
as well as, or better than, the best-performing garbage collectors.
The key contributions of my thesis are: 1) An in-depth analysis of the key design
choices for reference counting. 2) Novel optimizations guided by that analysis that
significantly improve reference counting performance and make it competitive with
a well tuned tracing garbage collector. 3) A new collector, RCImmix, that replaces
the traditional free-list heap organization of reference counting with a line and block heap structure, which improves locality, and adds copying to mitigate fragmentation.
The result is a collector that outperforms a highly tuned production generational
collector. 4) A conservative garbage collector based on RCImmix that matches the
performance of a highly tuned production generational collector.
Reference counting and conservative garbage collection have lived under the
shadow of tracing and exact garbage collection for a long time. My thesis focuses
on bringing these somewhat neglected branches of garbage collection back to life
in a high performance setting and leads to two very surprising results: 1) a new
garbage collector based on reference counting that outperforms a highly tuned production
generational tracing collector, and 2) a variant that delivers high performance
conservative garbage collection.
Down for the Count? Getting Reference Counting Back in the Ring
Reference counting and tracing are the two fundamental approaches that have underpinned garbage collection since 1960. However, despite some compelling advantages, reference counting is almost completely ignored in implementations of high performance systems today. In this paper we take a detailed look at reference counting to understand its behavior and to improve its performance. We identify key design choices for reference counting and analyze how the behavior of a wide range of benchmarks might affect design decisions. As far as we are aware, this is the first such quantitative study of reference counting. We use insights gleaned from this analysis to introduce a number of optimizations that significantly improve the performance of reference counting. We find that an existing modern implementation of reference counting has an average 30% overhead compared to tracing, and that in combination, our optimizations are able to completely eliminate that overhead. This brings the performance of reference counting on par with that of a well tuned mark-sweep collector. We keep our in-depth analysis of reference counting as general as possible so that it may be useful to other garbage collector implementers. Our finding that reference counting can be made directly competitive with well tuned mark-sweep should shake the community's prejudices about reference counting and perhaps open new opportunities for exploiting reference counting's strengths, such as localization and immediacy of reclamation.
BanglaNLG and BanglaT5: Benchmarks and Resources for Evaluating Low-Resource Natural Language Generation in Bangla
This work presents BanglaNLG, a comprehensive benchmark for evaluating
natural language generation (NLG) models in Bangla, a widely spoken yet
low-resource language. We aggregate six challenging conditional text generation
tasks under the BanglaNLG benchmark, introducing a new dataset on dialogue
generation in the process. Then, using a clean corpus of 27.5 GB of Bangla
data, we pretrain BanglaT5, a sequence-to-sequence Transformer model for
Bangla. BanglaT5 achieves state-of-the-art performance in all of these tasks,
outperforming several multilingual models by up to 9% absolute gain and 32%
relative gain. We are making the new dataset, the BanglaT5 language model, and
a leaderboard publicly available at https://github.com/csebuetnlp/BanglaNLG in
the hope of advancing future research and evaluation on Bangla NLG.
Comment: Accepted at the Findings of EACL 202
CrossSum: Beyond English-Centric Cross-Lingual Abstractive Text Summarization for 1500+ Language Pairs
We present CrossSum, a large-scale cross-lingual abstractive summarization
dataset comprising 1.7 million article-summary samples in 1500+ language pairs.
We create CrossSum by aligning identical articles written in different
languages via cross-lingual retrieval from a multilingual summarization
dataset. We propose a multi-stage data sampling algorithm to effectively train
a cross-lingual summarization model capable of summarizing an article in any
target language. We also propose LaSE, a new metric for automatically
evaluating model-generated summaries that shows a strong correlation with
ROUGE. Performance on ROUGE and LaSE indicates that pretrained models fine-tuned
on CrossSum consistently outperform baseline models, even when the source and
target language pairs are linguistically distant. To the best of our knowledge,
CrossSum is the largest cross-lingual summarization dataset and the first-ever
that does not rely solely on English as the pivot language. We are releasing
the dataset, alignment and training scripts, and the models to spur future
research on cross-lingual abstractive summarization. The resources can be found
at https://github.com/csebuetnlp/CrossSum
XL-Sum: Large-Scale Multilingual Abstractive Summarization for 44 Languages
Contemporary works on abstractive text summarization have focused primarily
on high-resource languages like English, mostly due to the limited availability
of datasets for low/mid-resource ones. In this work, we present XL-Sum, a
comprehensive and diverse dataset comprising 1 million professionally annotated
article-summary pairs from BBC, extracted using a set of carefully designed
heuristics. The dataset covers 44 languages ranging from low to high-resource,
for many of which no public dataset is currently available. XL-Sum is highly
abstractive, concise, and of high quality, as indicated by human and intrinsic
evaluation. We fine-tune mT5, a state-of-the-art pretrained multilingual model,
with XL-Sum and experiment on multilingual and low-resource summarization
tasks. XL-Sum induces results competitive with those obtained using
similar monolingual datasets: multilingual training yields ROUGE-2 scores
higher than 11 on the 10 languages we benchmark, with some exceeding 15.
Additionally, training on low-resource languages
individually also provides competitive performance. To the best of our
knowledge, XL-Sum is the largest abstractive summarization dataset in terms of
the number of samples collected from a single source and the number of
languages covered. We are releasing our dataset and models to encourage future
research on multilingual abstractive summarization. The resources can be found
at \url{https://github.com/csebuetnlp/xl-sum}.
Comment: Findings of the Association for Computational Linguistics, ACL 2021 (camera-ready)
Write-rationing garbage collection for hybrid memories
Emerging Non-Volatile Memory (NVM) technologies offer high capacity and energy efficiency compared to DRAM, but suffer from limited write endurance and longer latencies. Prior work seeks the best of both technologies by combining DRAM and NVM in hybrid memories to attain low latency, high capacity, energy efficiency, and durability. Coarse-grained hardware and OS optimizations then spread writes out (wear-leveling) and place highly mutated pages in DRAM to extend NVM lifetimes. Unfortunately, even with these coarse-grained methods, popular Java applications exact impractical NVM lifetimes of 4 years or less.
This paper shows how to make hybrid memories practical, without changing the programming model, by enhancing garbage collection in managed language runtimes. We find object write behaviors offer two opportunities: (1) 70% of writes occur to newly allocated objects, and (2) 2% of objects capture 81% of writes to mature objects. We introduce write-rationing garbage collectors that exploit these fine-grained behaviors. They extend NVM lifetimes by placing highly mutated objects in DRAM and read-mostly objects in NVM. We implement two such systems. (1) Kingsguard-nursery places new allocation in DRAM and survivors in NVM, reducing NVM writes by 5x versus NVM only with wear-leveling. (2) Kingsguard-writers (KG-W) places nursery objects in DRAM and survivors in a DRAM observer space. It monitors all mature object writes and moves unwritten mature objects from DRAM to NVM. Because most mature objects are unwritten, KG-W exploits NVM capacity while increasing NVM lifetimes by 11x. It reduces the energy-delay product by 32% over DRAM-only and 29% over NVM-only. This work opens up new avenues for making hybrid memories practical.
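The core placement policy described above can be sketched as a simple rule over per-object write counts. This is an illustrative Python model with invented names and data (not the Kingsguard implementation): mature objects that have been written stay in DRAM, while unwritten ones migrate to NVM, sparing the write-limited memory.

```python
# Toy write-rationing placement: mature objects observed to be written
# are kept in write-tolerant DRAM; unwritten objects go to NVM, whose
# large capacity is safe to use because reads do not wear it out.
mature_writes = {"a": 12, "b": 0, "c": 3, "d": 0}  # object -> writes observed

placement = {name: ("DRAM" if writes > 0 else "NVM")
             for name, writes in mature_writes.items()}
print(placement)
```

Because the abstract reports that only about 2% of objects capture 81% of mature-object writes, a rule of this shape sends the vast majority of the heap to NVM while concentrating nearly all writes in DRAM.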
GEMv2 : Multilingual NLG benchmarking in a single line of code
Evaluation in machine learning is usually informed by past choices, for example which datasets or metrics to use. This standardization enables the comparison on equal footing using leaderboards, but the evaluation choices become sub-optimal as better alternatives arise. This problem is especially pertinent in natural language generation, which requires ever-improving suites of datasets, metrics, and human evaluation to make definitive claims. To make following best model evaluation practices easier, we introduce GEMv2. The new version of the Generation, Evaluation, and Metrics Benchmark introduces a modular infrastructure for dataset, model, and metric developers to benefit from each other's work. GEMv2 supports 40 documented datasets in 51 languages. Models for all datasets can be evaluated online, and our interactive data card creation and rendering tools make it easier to add new datasets to the living benchmark.
Peer reviewe